Road to Responsible AI: Leidos’ Ron Keesing on the positive role the technology can have in government and business

This content is sponsored by Leidos.

Although many people may have only begun to learn about artificial intelligence and its applications relatively recently, Leidos has been working on AI solutions for the federal government for decades.

The company was part of the first Defense Advanced Research Projects Agency Grand Challenge in 2004, a seminal event involving the development of self-driving vehicles.

Ron Keesing, chief AI officer for Leidos, also noted that in the 2010s, the company built the first generation of autonomous vessels for the U.S. Navy.

“In general, we look to apply AI across many of the government’s most important missions and vexing challenges,” Keesing said during WTOP’s Road to Responsible AI.

He pointed out that the work Leidos does for the government involves processing massive amounts of data. “We operate the third largest information network on the planet, for the Defense Department,” he noted.

Other work includes modernizing health care record systems for DOD, helping to deliver care for the Department of Veterans Affairs, as well as managing air traffic control systems on behalf of the Federal Aviation Administration.

“All of these involve massive amounts of data, and there are great opportunities to apply AI to help make things better,” he said.

Building trust with federal agencies

Members of Congress tackling issues involving AI have said that it’s important to understand the sources of data and information used in the technology’s algorithms, as well as to understand and trust what the technology is doing.

Keesing said Leidos agrees. “As a company that’s been building AI for a long time, we’ve always harped on the centrality of trust — that humans actually trust the AI and what it does,” he said.

He cited as an example the work that Leidos does with the FAA.

“The FAA leadership has to trust that we’re going to help them fulfill their mission and responsibilities,” he said. “Transportation Security Administration agents on the ground have to trust that the system is actually finding what they need to find, and the flying public has got to trust that the system is both keeping them safe and treating them fairly.”

Keesing also said transparency is important when it comes to public safety and issues affecting people’s rights.

“It’s really important that we build levels of transparency and accountability — and auditability — into the way that our AI systems work, so that if something goes wrong, we can actually trace it back and make sure it doesn’t happen again,” he said.

Protecting privacy is necessary too, though Keesing noted it can be tricky.

A private citizen may want to know how data about them is being used in a given process. But if that citizen ultimately wants the data to be forgotten, things get complicated in the world of AI.

It could become “enormously expensive” to effectively retrain a model after information is removed.

“It’s important, as we think about this, we figure out and the government … takes the right, appropriate and balanced approach on how to do this in a manner that’s practical, while still representing the desires we all have as citizens,” he said.

Regulating AI

That’s why Keesing said his company believes it’s important for lawmakers and the federal government to be proactive in helping to establish guidelines for AI.

There’s currently a “patchwork of legislation” across various states, which Keesing said poses “a real regulatory burden” for technology companies.

“We have to figure out how to operate under potentially dozens of different privacy regimes, which is just not tenable,” Keesing said.

“We would love to see the federal government take action here and provide some leadership and uniform standards around data privacy that apply everywhere,” he added. “That’ll be to everyone’s benefit and really free up a level of innovation that’s hard to do today when everyone’s playing by their own rules.”

The congressional AI Task Force is working on a report that seeks to begin establishing a framework. Also, lawmakers are considering more than a dozen bills that are incremental in how they regulate AI. (Learn more about the task force from our interview with its co-chair, Rep. Jay Obernolte.)

Those involved with the task force say they are optimistic about the progress that they’ve made working on various issues this year.

Partnerships between AI and people

Keesing said he shares that optimism about the future of AI. He points to an example of how AI has improved things within Leidos itself.

For instance, AI has proven particularly good at helping people write computer code.

“What’s interesting is, when you pair humans and machines together to do this, you might think, ‘Well, the AI just starts writing the code for the humans,’ ” he said. “When they’ve got an AI partner next to them, they spend more time coding, but they spend less time on the parts of the job that they hated before. They’re actually much happier and more productive — about 40% more productive — when they’ve got this AI partner helping them do their work.”

That doesn’t mean the company is getting rid of developers.

“We’re talking about now that we’re more productive, how can we write better quality code?” he said.

That bodes well for the future, he believes.

“One of the things AI can do as a partner to humans, potentially, is lower the barrier to actually being able to participate in building out all these future solutions,” he said. “So, I’m really optimistic as well, if we can frame AI as a human-machine partnership on all we can do.”

Discover more articles and videos now on WTOP’s Road to Responsible AI event page
